The corpus I chose for my final assignment compares rock music from the 80s, 90s, and early 2000s. Rock is my favourite genre across the decades, and looking into what ties its eras together and what sets them apart interests me. The natural point of comparison in this corpus is therefore the decade: the 80s, the 90s, and the early 2000s.
Through this corpus, I expect the themes to be quite similar between decades: no matter the year, the rock genre carries certain messages that won’t change, such as rebellion, social commentary, and personal struggle. However, I expect the musical style to differ, with the 90s bringing more energy than the 80s, and the 2000s more still.
Overall, I don’t see any limitations or gaps in the chosen corpus. If I were to expand it, I would expect gaps around lesser-known artists or covers of well-known songs, but with the limited set of tracks already selected, I didn’t run into any gaps on Spotify.
Some typical tracks from my corpus include “Sweet Child o’ Mine” by Guns N’ Roses, “Smells Like Teen Spirit” by Nirvana, and “How You Remind Me” by Nickelback, as these are considered perfect representations of the genre at the stage of their decade. On the other hand, some atypical tracks would be “Wonderwall” by Oasis or “Feel Good Inc.” by Gorillaz, as these deviate from the mainstream trends and take a more unique or experimental approach to the genre.
The following board shows the tempogram analysis of all the “number 1” songs in this corpus: the tracks that rank first in tempo, energy, speechiness, and several other variables.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                255.          0.528              240.                0
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
According to Spotify’s API, the song with the highest tempo in my corpus is “Back in Black” by AC/DC. You can see on the tempogram that the tempo is fairly steady at about 90 BPM, but the yellow spreads out at about 210 seconds. At this point, the guitar comes in as the lead instrument for a segment, changing the tempo for a little while, as the tempogram visualises.
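For readers unfamiliar with how a tempogram arrives at numbers like “90 BPM”, the core idea can be sketched in a few lines: autocorrelate an onset envelope and convert the strongest lag to a tempo. A full tempogram repeats this inside a sliding window over the track. This is a minimal, hypothetical sketch using a synthetic click track, not Spotify’s actual data; the frame rate and envelope are assumptions for illustration.

```python
FRAME_RATE = 100  # onset-envelope frames per second (assumed for this sketch)

def estimate_bpm(envelope, min_bpm=60, max_bpm=240):
    """Autocorrelate an onset envelope and convert the best lag to BPM."""
    best_lag, best_score = None, float("-inf")
    min_lag = round(FRAME_RATE * 60 / max_bpm)  # short lag = fast tempo
    max_lag = round(FRAME_RATE * 60 / min_bpm)  # long lag = slow tempo
    for lag in range(min_lag, max_lag + 1):
        score = sum(envelope[i] * envelope[i + lag]
                    for i in range(len(envelope) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return 60 * FRAME_RATE / best_lag

# Synthetic envelope: a click every 67 frames (0.67 s), i.e. roughly 90 BPM.
envelope = [1.0 if i % 67 == 0 else 0.0 for i in range(1000)]
print(round(estimate_bpm(envelope)))  # prints 90
```

A tempogram plots this score for every candidate tempo at every window position, which is why a steady song shows one bright horizontal band and a tempo change shows the yellow spreading or jumping.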
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                176.           2.91              176.            0.441
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
“American Idiot” by Green Day is not only the most energetic song in my corpus, but also the loudest. Since it is an outlier in not one but two important variables, I expected this to show up somehow in the tempogram. However, aside from a couple of minor bumps, the tempo stays stable at around 95 BPM throughout the song.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                215.              0              210.            0.773
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Queen truly are Rhythm Royalty, having made the most danceable song in my corpus, “Another One Bites The Dust”. Around an average of about 110 BPM, we see some movement away from the average at about 80 seconds, lasting 20-25 seconds. This is the second time the chorus comes around and the peak of the song, when the iconic beat of “Another One Bites The Dust” comes in and switches up the tempo. The chorus (and this beat) comes around two more times in the song: the first time at around 40 seconds, where we see the first slight movement of yellow in the tempogram, and the third at about 150 seconds, where we see bigger movement, but not as much as at the peak of the song.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                175.              0              172.            0.356
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Joan Jett & The Blackhearts must have been rocking with joy while making “I Love Rock ’N Roll”, because that song has the highest valence in my corpus. The song doesn’t have a stable tempo, mostly ranging between 80-85 BPM and 100 BPM, but there are some outlier moments worth looking into. The first is at around 14 seconds, where the yellow spreads beyond the normal range; listening to the song, this is where we move from the introduction beat to the guitar riff that really starts the song. Another outlier moment on the tempogram is at 100 seconds, which corresponds to the start of a segment with a focal point on the guitar. The last moment worth noting is at 125 seconds, which marks the start of the a cappella section of “I Love Rock ’N Roll”.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                240.              0              240.            0.169
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Another variable the Spotify API measures is a song’s “liveness”: the presence of an audience. The track with the highest liveness in my corpus is “Galway Girl” by Mundy, which makes sense, since it’s the only live recording in the corpus. As you can see, the tempogram gives us a few moments that move away from the general 95 BPM. At 100 seconds, the yellow line suddenly drops: at that point the beat slows down so the singer can speak to his audience at the concert and let them sing. The yellow line then goes back up to its original spot at 130 seconds, when the beat picks up and the singer continues. A similar thing happens later: the beat drops for a quiet moment in the song, visible as the second drop of the yellow line, then picks up to a very high-energy, high-BPM passage at 190 seconds, visualised by the yellow spreading all over the tempogram.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                223.          0.673              218.             0.84
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
The song Spotify deemed to have the highest speechiness in my corpus is “Feel Good Inc.” by Gorillaz. From the tempogram, we can see the tempo is pretty stable at 140 BPM. There is some movement, with the yellow spreading out at around 135 seconds, caused by a new section of the song where the beat slows down and the instruments sound softer, leading into a slow build-up.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                181.          0.175              181.            0.845
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Following up on the song with the highest speechiness, “Need You Tonight” by INXS had the highest instrumentalness according to the Spotify API. I expected the tempogram to show some movement in tempo because of the different “feel” of different segments of the song. In reality, the tempogram shows a steady tempo of around 110 BPM throughout. After seeing the tempogram and listening to the song again, this makes sense: although the “feel” of some sections differs, the background instruments and beat stay the same throughout the song.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                263.          0.115              244.            0.037
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Last but certainly not least, the most acoustic song in the corpus: “My Immortal” by Evanescence. As you can see on the tempogram, the yellow is spread all over the graph, making the tempo harder to analyse than for the other tracks. Nonetheless, we can see that the tempo mostly varies between 140 and 160 BPM. There is a significant drop in tempo at around 80 seconds, which is when the singer takes a breath and the song quiets down for a few seconds.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                249.           5.94              234.            0.797
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
“Livin’ On A Prayer” by Bon Jovi is not only one of the most famous songs the band has made, but one of the most famous songs of all time. It features a distinctive chord progression that is both recognizable and iconic in the realm of rock music.
“November Rain” by Guns N’ Roses is a song whose keys are so distinct they can be identified anywhere, anytime, by anyone. “November Rain” incorporates various key changes and modulations throughout, and analysing them in a keygram can reveal the song’s overall tonal journey, highlighting key moments and transitions that contribute to its emotional impact.
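For context on how a keygram decides which key a passage is in, one common approach (not necessarily the one Spotify uses, this is an illustrative assumption) is template matching: correlate each chroma frame against a major-key profile rotated into all twelve keys. A minimal sketch using the Krumhansl-Kessler major profile:

```python
import math

# Krumhansl-Kessler major-key profile, starting on the tonic (index 0 = C).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
         2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def best_major_key(chroma):
    """Return the major key whose rotated profile best matches `chroma`."""
    def corr(a, b):  # Pearson correlation
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = math.sqrt(sum((x - ma) ** 2 for x in a))
        db = math.sqrt(sum((y - mb) ** 2 for y in b))
        return num / (da * db)
    scores = []
    for shift in range(12):
        template = MAJOR[-shift:] + MAJOR[:-shift]  # profile rotated to key `shift`
        scores.append(corr(chroma, template))
    return NOTES[max(range(12), key=scores.__getitem__)]

# A chroma vector that is exactly the profile rotated up nine semitones
# should match A major.
print(best_major_key(MAJOR[-9:] + MAJOR[:-9]))  # prints A
```

A keygram repeats this per time window, so a modulation like those in “November Rain” shows up as the best-matching row changing partway through the track.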
Using this visualisation, we can see the most popular keys used in each decade of rock music. An interesting takeaway is that while they stay similar across the decades, there are slight differences we can spot.
With this visualisation, we can compare the songs in the corpus from all three decades with regard to their tempo.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                176.           2.91              176.            0.441
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Spotify’s API looks into multiple variables to analyse its tracks. When considering which outlier would be most interesting to visualise as a cepstrogram, two variables stood out: loudness and energy. It just so happens that one song in my corpus was both the loudest and the most energetic: “American Idiot” by Green Day. This can be seen in the cepstrogram from how high the magnitude is in c01, but mainly in c02, throughout the whole song.
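To make the c01/c02 labels less mysterious: a cepstrogram stacks low-order cepstral-style coefficients over time, and each coefficient is one term of a DCT of the log band energies. Spotify’s timbre basis is proprietary, so this is only a generic sketch of the idea with a made-up spectral frame; the low coefficients respond to overall level and broad spectral shape, which is why a loud, energetic track lights them up.

```python
import math

def cepstral_coeffs(log_energies, n_coeffs=4):
    """DCT-II of log band energies. Coefficient 0 tracks overall level,
    coefficient 1 tracks broad spectral tilt, and so on."""
    n = len(log_energies)
    return [sum(log_energies[j] * math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
            for k in range(n_coeffs)]

# Hypothetical log band energies for one frame: loud and bass-heavy.
frame = [3.0, 2.0, 1.0, 0.0]
print([round(c, 2) for c in cepstral_coeffs(frame)])
```

A cepstrogram computes these coefficients for every short frame of the track and plots coefficient index against time, with magnitude as colour.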
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                259.          0.196              251.             0.14
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
When introducing my corpus, I detailed what may be considered typical and atypical songs from each decade. From the 90s, an atypical song of the rock genre would be “Wonderwall” by Oasis. Here, you can see the self-similarity matrix based on timbre for that song. There is an obvious point of comparison just after the 100-second mark; when listening to the song, this is the point right after the chorus. I believe it stands out in the timbre features because of the isolation of instruments in this segment: you can clearly separate the drums, guitar, and vocals here.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                259.          0.196              251.             0.14
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Similarly, when looking at the chroma features through a self-similarity matrix, this point of comparison also stands out, due to the similarity of the pitches.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                302.           1.02              287.              0.8
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
We discussed what an atypical rock song from the 90s looks like, but what about a typical one? Take “Smells Like Teen Spirit” by Nirvana. Looking at the self-similarity matrix based on the song’s timbre features, the darkest part of the grid lies between roughly the 160-second and 210-second marks. This is an instrumental section where the instruments clearly stand out and you can distinguish the different instruments playing. On the other hand, when you compare this segment to the one from roughly 25 to 60 seconds, the matrix is yellow and quite light-coloured, meaning there is little similarity between the two segments. Listening to that passage, it is quite low-energy compared to the instrumental solo later on, which explains the light colours on the matrix.
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                302.           1.02              287.              0.8
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
When looking at the same song’s self-similarity matrix based on chroma features, more or less the same passages light up. However, the coloured “squares” vary more in this matrix, because instead of comparing the “feel” of each segment, it compares the specific pitches, also known as chroma.
What we can take away from the analysis of both of these songs is that it is much easier to examine the differences and similarities in timbre and chroma for a typical song of the rock genre. In the visualisations of “Smells Like Teen Spirit” by Nirvana, certain aspects stand out to the eye and make the song quite easy to analyse. In the visualisations of “Wonderwall” by Oasis, on the other hand, it is much more difficult to find moments that stand out. In my opinion, this is because the typical song varies in timbre and chroma, with segments ranging from high energy and loudness to low energy, whereas in the atypical song, the chroma and pitch of the segments stay more or less the same throughout, and the same goes for timbre.
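The self-similarity matrices discussed above boil down to one operation: compare every frame’s feature vector with every other frame’s, so repeated material shows up as bright off-diagonal blocks. A minimal sketch with made-up three-dimensional frames (real chroma frames have twelve dimensions, and timbre frames use Spotify’s own basis):

```python
import math

def self_similarity(frames):
    """Cosine similarity between every pair of feature frames."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    return [[cos(a, b) for b in frames] for a in frames]

# Made-up frames: frames 0 and 2 repeat the same material, frame 1 contrasts.
frames = [[1.0, 0.0, 0.2], [0.1, 1.0, 0.0], [1.0, 0.0, 0.2]]
ssm = self_similarity(frames)
print(round(ssm[0][2], 2), round(ssm[0][1], 2))  # prints 1.0 0.1
```

The dark-versus-light contrast described for “Smells Like Teen Spirit” is exactly this: segments that repeat each other score near 1, contrasting segments score near 0 (some plots invert the colour scale, as here where dark means similar).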
The scatter plot I made is divided into three sections, one for each decade, with the danceability variable on the y-axis and the valence variable on the x-axis. I was interested to see whether the valence of a song made it more danceable. The plot also shows track popularity, encoded by colour, where a track is considered “Very” popular if it scores above 75 and “Little” popular if it scores below 75. I also included loudness, which I found interesting to look into for rock music, encoded by size. From the plot, I can see that 00s rock is louder than both other decades, which was interesting to find out, but not surprising. Valence and danceability don’t appear to affect track popularity, and if valence and danceability have an effect on one another, it can’t be seen from a quick visualisation and would need to be looked into further.
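One way the valence-danceability question could be looked into further is a simple Pearson correlation between the two variables. The sketch below uses made-up values, not the actual corpus measurements, just to show the calculation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Hypothetical per-track values on Spotify's 0-1 scale (not corpus data).
valence      = [0.2, 0.5, 0.8, 0.4, 0.9]
danceability = [0.3, 0.5, 0.7, 0.5, 0.8]
print(round(pearson(valence, danceability), 3))
```

A value near +1 would support the idea that happier songs are more danceable, near 0 would suggest no linear relationship; on the real corpus one would also want a significance test before drawing conclusions.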
```
# A tibble: 1 × 13
  analyzer_version duration end_of_fade_in start_of_fade_out tempo_confidence
  <chr>               <dbl>          <dbl>             <dbl>            <dbl>
1 4.0.0                175.              0              172.            0.356
# ℹ 8 more variables: time_signature_confidence <dbl>, key_confidence <dbl>,
#   mode_confidence <dbl>, bars <list>, beats <list>, tatums <list>,
#   sections <list>, segments <list>
```
Out of 90 songs across three decades, this song, “I Love Rock ’N Roll” by Joan Jett & the Blackhearts, had the highest valence. If you really focus, the 80s generally had happier rock music than the other two decades we’re looking into. So, the happiest rock song of the 80s, 90s, and 00s is now presented as a chromagram.